
    Towards a robust, effective and resource-efficient machine learning technique for IoT security monitoring.

    Internet of Things (IoT) devices are becoming increasingly popular and an integral part of our everyday lives, making them a lucrative target for attackers. These devices require suitable security mechanisms that enable robust and effective detection of attacks. Machine Learning (ML) and its subfield Deep Learning (DL) offer promise, but they can be too computationally expensive to provide better detection on resource-constrained IoT devices. Therefore, this research proposes an optimization method for training ML and DL models for effective and efficient security monitoring of IoT devices. It first investigates the feasibility of the Light Gradient Boosting Machine (LGBM) for attack detection in IoT environments, proposing an optimization procedure that yields an effective yet lightweight model. The trained LGBM successfully discerns attacks from regular traffic across the IoT benchmark datasets used in this research. Because LGBM is a traditional ML technique, it may struggle to learn the complex network traffic patterns present in IoT datasets. We therefore examine Deep Neural Networks (DNNs) and propose an effective and efficient DNN-based security solution for IoT security monitoring that delivers further resource savings alongside accurate attack detection. The results are promising: the proposed optimization method exploits mini-batch gradient descent with simulated micro-batching to build effective and efficient DNN-based IoT security solutions. Building on the DNN's success in effective and efficient attack detection, we further examine its resistance to adversarial attacks. The resulting DNN is more resistant to adversarial samples than its benchmark counterparts and other conventional ML methods. To evaluate the effectiveness of our proposal, we considered on-device learning in federated learning settings, using decentralized edge devices to strengthen data privacy in resource-constrained environments. To this end, the performance of the method was evaluated against various realistic IoT and non-IoT datasets (e.g. NBaIoT, MNIST) on virtual and realistic testbed set-ups with GB-BXBT-2807 edge-computing-like devices. The experimental results show that the proposed method reduces memory and time usage by 81% and 22%, respectively, in the simulated environment of virtual workers compared to its benchmark counterpart. In the realistic testbed scenario, it saves 6% of the memory footprint and reduces execution time by 15%, while maintaining accuracy that is superior or comparable to the state of the art.
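
    The simulated micro-batching mentioned above can be illustrated with a short, assumed PyTorch sketch: each mini-batch is split into smaller micro-batches whose gradients are accumulated before a single optimizer step, capping peak activation memory while keeping the effective batch size unchanged. The function name and the micro_batch_size value are illustrative assumptions, not the authors' actual configuration.

        def train_step(model, optimizer, loss_fn, x, y, micro_batch_size=32):
            """One mini-batch update via simulated micro-batching (gradient
            accumulation): only micro_batch_size samples occupy activation
            memory at any time. Illustrative sketch, not the paper's code."""
            optimizer.zero_grad()
            num_micro = (x.size(0) + micro_batch_size - 1) // micro_batch_size
            for xb, yb in zip(x.split(micro_batch_size), y.split(micro_batch_size)):
                loss = loss_fn(model(xb), yb) / num_micro  # scale so the accumulated gradient approximates the full-batch mean
                loss.backward()                            # gradients accumulate in parameter .grad buffers
            optimizer.step()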

    Towards a robust, effective and resource efficient machine learning technique for IoT security monitoring.

    The application of Deep Neural Networks (DNNs) for monitoring cyberattacks in Internet of Things (IoT) systems has gained significant attention in recent years. However, achieving optimal detection performance through DNN training has posed challenges due to computational intensity and vulnerability to adversarial samples. To address these issues, this paper introduces an optimization method that combines regularization and simulated micro-batching. This approach enables the training of DNNs in a robust, efficient, and resource-friendly manner for IoT security monitoring. Experimental results demonstrate that the proposed DNN model, including its performance in Federated Learning (FL) settings, exhibits improved attack detection and resistance to adversarial perturbations compared to benchmark baseline models and conventional Machine Learning (ML) methods typically employed in IoT security monitoring. Notably, the proposed method achieves significant reductions of 79.54% and 21.91% in memory and time usage, respectively, when compared to the benchmark baseline in simulated virtual worker environments. Moreover, in realistic testbed scenarios, the proposed method reduces memory footprint by 6.05% and execution time by 15.84%, while maintaining accuracy levels that are superior or comparable to state-of-the-art methods. These findings validate the feasibility and effectiveness of the proposed optimization method for enhancing the efficiency and robustness of DNN-based IoT security monitoring.
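
    A common way to probe the kind of adversarial resistance reported above is the Fast Gradient Sign Method (FGSM). The sketch below is a generic, assumed illustration of crafting such perturbations and measuring the accuracy drop; the epsilon value and the use of FGSM itself are assumptions rather than the specific attack configuration evaluated in the paper.

        import torch

        def fgsm_accuracy(model, loss_fn, x, y, epsilon=0.05):
            """Accuracy on FGSM-perturbed inputs: each sample is shifted by
            epsilon in the direction of the sign of the loss gradient."""
            x = x.clone().detach().requires_grad_(True)
            loss = loss_fn(model(x), y)
            loss.backward()
            x_adv = (x + epsilon * x.grad.sign()).detach()   # adversarial samples
            with torch.no_grad():
                preds = model(x_adv).argmax(dim=1)
            return (preds == y).float().mean().item()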

    Resource efficient boosting method for IoT security monitoring.

    Machine Learning (ML) methods are widely proposed for security monitoring of the Internet of Things (IoT). However, these methods can be computationally expensive for resource-constrained IoT devices. This paper proposes an optimized, resource-efficient ML method, based on the Light Gradient Boosting Machine (LGBM), that can detect various attacks on IoT devices. The performance of this approach was evaluated against four realistic IoT benchmark datasets. Experimental results show that the proposed method can effectively detect attacks on IoT devices with limited resources, and outperforms state-of-the-art techniques.
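
    As a rough illustration of how a lightweight LGBM classifier of this kind might be trained, the sketch below uses the lightgbm and scikit-learn packages on synthetic data; the hyperparameters (num_leaves, n_estimators) and the synthetic features are illustrative assumptions, not the optimized settings or the datasets used in the paper.

        import lightgbm as lgb
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import f1_score

        # Synthetic stand-in for network-flow features and attack/benign labels.
        X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=0)

        clf = lgb.LGBMClassifier(
            num_leaves=31,      # shallower trees keep the model small (illustrative value)
            n_estimators=100,   # fewer boosting rounds keep inference cheap (illustrative value)
        )
        clf.fit(X_train, y_train)
        print("F1:", f1_score(y_test, clf.predict(X_test)))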

    Memory efficient federated deep learning for intrusion detection in IoT networks.

    Deep Neural Network (DNN) methods are widely proposed for cyber security monitoring. However, training DNNs requires a lot of computational resources. This restricts direct deployment of DNNs to resource-constrained environments like the Internet of Things (IoT), especially in federated learning settings that train an algorithm across multiple decentralized edge devices. Therefore, this paper proposes a memory-efficient method of training a Fully Connected Neural Network (FCNN) for IoT security monitoring in federated learning settings. The model's performance was evaluated against eleven realistic IoT benchmark datasets. Experimental results show that the proposed method can reduce the memory requirement by up to 99.46 percentage points compared to its benchmark counterpart, while maintaining state-of-the-art accuracy and F1 score.
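
    For concreteness, a Fully Connected Neural Network of the kind referred to above might look like the following small PyTorch module; the layer sizes, input dimension and number of classes are illustrative assumptions, not the architecture evaluated in the paper.

        import torch.nn as nn

        class FCNN(nn.Module):
            """Compact fully connected classifier for tabular network-traffic features."""
            def __init__(self, in_features=115, hidden=64, num_classes=2):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(in_features, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, num_classes),
                )

            def forward(self, x):
                return self.net(x)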

    Reducing computational cost in IoT cyber security: case study of artificial immune system algorithm.

    Using Machine Learning (ML) for Internet of Things (IoT) security monitoring is challenging because the resource-constrained nature of IoT devices limits the deployment of resource-hungry monitoring algorithms. Therefore, the aim of this paper is to investigate how to reduce the resource consumption of ML algorithms in IoT security monitoring. The paper starts with an empirical analysis of the resource consumption of the Artificial Immune System (AIS) algorithm, and then employs carefully selected feature reduction techniques to reduce the computational cost of running the algorithm. The proposed approach significantly reduces computational cost, as illustrated in the paper. We validate our results using two benchmark datasets and one purposely simulated dataset.
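
    A feature reduction step of the kind described above could, for example, resemble the following scikit-learn sketch, which keeps only the k most informative features before the (costly) monitoring algorithm runs on the device. SelectKBest with mutual information and the value k=8 are assumed stand-ins, not the specific techniques selected in the paper.

        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, mutual_info_classif

        # Synthetic stand-in for raw traffic features and attack labels.
        X, y = make_classification(n_samples=2000, n_features=40, n_informative=8, random_state=0)

        # Keep only the 8 most informative features, shrinking the input the
        # detection algorithm must process on the device.
        selector = SelectKBest(mutual_info_classif, k=8)
        X_reduced = selector.fit_transform(X, y)
        print(X.shape, "->", X_reduced.shape)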

    Humoral immunological kinetics of severe acute respiratory syndrome coronavirus 2 infection and diagnostic performance of serological assays for coronavirus disease 2019: an analysis of global reports

    As coronavirus disease 2019 (COVID-19) cases continue to rise and second waves are reported in some countries, serological test kits and strips are being considered to scale up an adequate laboratory response. This study provides an update on the kinetics of the humoral immune response to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection and the performance characteristics of serological protocols (lateral flow assay [LFA], chemiluminescence immunoassay [CLIA] and ELISA) used for evaluations of recent and past SARS-CoV-2 infection. A thorough and comprehensive review of suitable and eligible full-text articles was performed on PubMed, Scopus, Web of Science, Worldometer and medRxiv from 10 January to 16 July 2020. These articles were searched using the Medical Subject Headings terms 'COVID-19', 'Serological assay', 'Laboratory Diagnosis', 'Performance characteristics', 'POCT', 'LFA', 'CLIA', 'ELISA' and 'SARS-CoV-2'. Data from original research articles on SARS-CoV-2 antibody detection from the second day postinfection onwards were included in this study. In total, there were 7938 published articles on the humoral immune response and laboratory diagnosis of COVID-19. Of these, 74 were included in this study. The detection, peak and decline periods of blood anti-SARS-CoV-2 IgM, IgG and total antibodies for point-of-care testing (POCT), ELISA and CLIA vary widely. The most promising of these assays for POCT detected anti-SARS-CoV-2 at day 3 postinfection and peaked on the 15th day; ELISA products detected anti-SARS-CoV-2 IgM and IgG at days 2 and 6 and peaked on the eighth day; and the most promising CLIA product detected anti-SARS-CoV-2 at day 1 and peaked on the 30th day. The LFA, ELISA and CLIA products with the best performance characteristics were those targeting total SARS-CoV-2 antibodies, followed by those targeting anti-SARS-CoV-2 IgG and then IgM. Essentially, the CLIA-based SARS-CoV-2 tests had the best performance characteristics, followed by ELISA and then POCT. Given the varied performance characteristics of all the serological assays, there is a need to continuously improve their detection thresholds, as well as to monitor and re-evaluate their performances, to assure their significance and applicability for COVID-19 clinical and epidemiological purposes.

    Resource efficient federated deep learning for IoT security monitoring.

    Federated Learning (FL) uses a distributed Machine Learning (ML) concept to build a global model from multiple local models trained on distributed edge devices. A disadvantage of the FL paradigm is the requirement of many communication rounds before model convergence. As a result, running on-device FL with resource-hungry algorithms such as Deep Neural Networks (DNNs) is a challenge, especially in resource-constrained Internet of Things (IoT) environments for security monitoring. To address this issue, this paper proposes the Resource Efficient Federated Deep Learning (REFDL) method. Our method exploits and optimizes a Federated Averaging (Fed-Avg) DNN-based technique to reduce computational resource consumption for IoT security monitoring. It utilizes pruning and simulated micro-batching to optimize the Fed-Avg DNN for effective and efficient IoT attack detection at distributed edge nodes. The performance was evaluated using various realistic IoT and non-IoT benchmark datasets on virtual and testbed environments built with GB-BXBT-2807 edge-computing-like devices. The experimental results show that the proposed method can reduce memory usage by 81% in the simulated environment of virtual workers compared to its benchmark counterpart. In the realistic testbed scenario, it saves 6% of memory while reducing execution time by 15% without degrading accuracy.
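
    The Fed-Avg aggregation at the heart of such a method can be sketched as follows: each edge node trains locally, and the server averages the returned weights, here weighted by local dataset size. This is a generic PyTorch illustration under assumed names (fed_avg, local_states, local_sizes), not the exact REFDL implementation, and it omits the pruning and micro-batching steps performed on the clients.

        import copy

        def fed_avg(global_model, local_states, local_sizes):
            """Weighted average of client model weights (Fed-Avg).
            local_states: list of state_dicts from edge nodes,
            local_sizes: number of training samples held by each node."""
            total = sum(local_sizes)
            avg_state = copy.deepcopy(local_states[0])
            for key in avg_state:
                avg_state[key] = sum(
                    state[key] * (n / total) for state, n in zip(local_states, local_sizes)
                )
            global_model.load_state_dict(avg_state)  # broadcast-ready global model
            return global_model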

    Robust, effective and resource efficient deep neural network for intrusion detection in IoT networks.

    Internet of Things (IoT) devices are becoming increasingly popular and an integral part of our everyday lives, making them a lucrative target for attackers. These devices require suitable security mechanisms that enable robust and effective detection of attacks. Deep Neural Networks (DNNs) offer promise, but they require large amounts of computational resources to provide better detection, and their detection capabilities can be exploited by adversarial attacks. Therefore, this paper proposes a method to train a Fully Connected Neural Network (FCNN) for IoT security monitoring in a robust, effective and resource-efficient way. The resulting model is assessed against various benchmark datasets created using commercial IoT devices, such as doorbells, security cameras, and thermostats. Experimental results demonstrate the model's ability to maintain state-of-the-art accuracy and F1-score while reducing training memory and time consumption by 99.99 and 99.80 percentage points, respectively, compared to its benchmark counterpart.
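
    A lightweight way to reproduce the kind of memory and time comparison quoted above is to wrap a training call with Python's tracemalloc and time modules, as in this assumed sketch; train_one_epoch is a placeholder for whichever training routine is being profiled, and tracemalloc only tracks Python-level heap allocations.

        import time
        import tracemalloc

        def profile_training(train_one_epoch, *args, **kwargs):
            """Report wall-clock time and peak Python heap usage of one training call."""
            tracemalloc.start()
            start = time.perf_counter()
            train_one_epoch(*args, **kwargs)           # placeholder training routine
            elapsed = time.perf_counter() - start
            _, peak = tracemalloc.get_traced_memory()  # peak bytes allocated during the call
            tracemalloc.stop()
            return elapsed, peak / 1e6                 # seconds, megabytes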